AI-as-exploration: Navigating intelligence space

Mollo, Dimitri Coelho

arXiv.org Artificial Intelligence

Artificial Intelligence is a field that lives many lives, and the term has come to encompass a motley collection of scientific and commercial endeavours. In this paper, I articulate the contours of a rather neglected but central scientific role that AI has to play, which I dub 'AI-as-exploration'. The basic thrust of AI-as-exploration is that of creating and studying systems that can reveal candidate building blocks of intelligence that may differ from the forms of human and animal intelligence we are familiar with. In other words, I suggest that AI is one of the best tools we have for exploring intelligence space, namely the space of possible intelligent systems. I illustrate the value of AI-as-exploration by focusing on a specific case study: recent work on the capacity to combine novel and invented concepts in humans and Large Language Models. I show that the latter, despite showing human-level accuracy in such a task, most probably solve it in ways radically different from those hypothesised for humans, but no less relevant to intelligence research.


Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability

Grzankowski, Alex

arXiv.org Artificial Intelligence

The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, "Black-box Interpretability", is wrongheaded. But there is a better way. There is an exciting and emerging discipline of "Inner Interpretability" (and specifically Mechanistic Interpretability) that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can't pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point in my view is seeing how to get the research on the right track. Towards the end of the paper, I will show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached, so the paper helps draw out a profitable, future two-way exchange between Philosophers and Computer Scientists.


Institutional Metaphors for Designing Large-Scale Distributed AI versus AI Techniques for Running Institutions

Boer, Alexander, Sileno, Giovanni

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) started out with an ambition to reproduce the human mind, but, as the sheer scale of that ambition became manifest, it quickly retreated into either studying specialized intelligent behaviours, or proposing over-arching architectural concepts for interfacing specialized intelligent behaviour components, conceived of as agents in a kind of organization. This agent-based modeling paradigm, in turn, proves to have interesting applications in understanding, simulating, and predicting the behaviour of social and legal structures on an aggregate level. For these reasons, this chapter examines a number of relevant cross-cutting concerns, conceptualizations, modeling problems and design challenges in large-scale distributed Artificial Intelligence, as well as in institutional systems, and identifies potential grounds for novel advances.


The Natural Roots of Artificial Intelligence

#artificialintelligence

To begin our exploration of AI, we start by defining intelligence. The Oxford Universal Dictionary (1955) leans heavily on the word's Latin root, intelligere (to understand), defining intelligence as [1] the faculty of understanding; intellect; and [2] understanding as a quality admitting of degree; spec. While this focus on "understanding" does highlight the capacity to perceive meaning, it is overbroad; we need to look further for more clarity. Within academic circles, researchers do not have a universally shared definition of intelligence. Broadly speaking, there are four major viewpoints on intelligence that carry over to artificial intelligence research -- each with proponents and critics.


Evolving Self-supervised Neural Networks: Autonomous Intelligence from Evolved Self-teaching

Le, Nam

arXiv.org Artificial Intelligence

This paper presents a technique called evolving self-supervised neural networks: neural networks that can teach themselves, intrinsically motivated, without external supervision or reward. The proposed method represents something of a paradigm shift, and differs greatly from both traditional gradient-based learning and evolutionary algorithms in that it combines evolution and learning, more specifically self-learning, rather than treating these phenomena as alternatives. I simulate a multi-agent system in which neural networks are used to control autonomous foraging agents with little domain knowledge. Experimental results show that only evolved self-supervised agents demonstrate intelligent behaviour; neither evolution nor self-learning alone suffices. Indications for future work on evolving intelligence are also presented.
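The combination the abstract describes -- an evolutionary outer loop whose individuals also adapt during their lifetime without any external reward signal -- can be sketched in miniature. Everything below (the toy foraging task, the Hebbian-style self-teaching rule, the genome layout, and all names) is an illustrative assumption, not the paper's actual method:

```python
import random

random.seed(0)

N_INPUTS, N_AGENTS, N_GENERATIONS, LIFETIME = 4, 20, 30, 50

def new_genome():
    # Genome encodes initial weights AND a learning rate for self-teaching,
    # so evolution can tune both the starting point and the plasticity.
    return {"w": [random.uniform(-1, 1) for _ in range(N_INPUTS)],
            "lr": random.uniform(0.0, 0.5)}

def lifetime_fitness(genome):
    # Toy foraging task: the agent should respond when "food" features dominate.
    w = list(genome["w"])
    score = 0
    for _ in range(LIFETIME):
        x = [random.uniform(-1, 1) for _ in range(N_INPUTS)]
        food = sum(x[:2]) > 0          # hidden regularity of the environment
        act = sum(wi * xi for wi, xi in zip(w, x)) > 0
        score += 1 if act == food else 0
        # Self-teaching: a Hebbian-style update driven by the agent's own
        # activation -- the agent never observes `food` or the score directly.
        for i in range(N_INPUTS):
            w[i] += genome["lr"] * (1 if act else -1) * x[i]
            w[i] = max(-2.0, min(2.0, w[i]))
    return score / LIFETIME

def evolve():
    pop = [new_genome() for _ in range(N_AGENTS)]
    for _ in range(N_GENERATIONS):
        scored = sorted(pop, key=lifetime_fitness, reverse=True)
        parents = scored[: N_AGENTS // 2]
        # Truncation selection plus Gaussian mutation of weights and learning rate.
        pop = parents + [
            {"w": [w + random.gauss(0, 0.1) for w in p["w"]],
             "lr": max(0.0, p["lr"] + random.gauss(0, 0.05))}
            for p in parents]
    return pop

pop = evolve()
best = max(lifetime_fitness(g) for g in pop)
print(f"best lifetime accuracy: {best:.2f}")
```

The point of the sketch is the division of labour: selection operates on lifetime performance, while the within-lifetime update uses only signals the agent generates itself.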


Is coding a relevant metaphor for building AI? A commentary on "Is coding a relevant metaphor for the brain?", by Romain Brette

Santoro, Adam, Hill, Felix, Barrett, David, Raposo, David, Botvinick, Matthew, Lillicrap, Timothy

arXiv.org Artificial Intelligence

Brette contends that the neural coding metaphor is an invalid basis for theories of what the brain does (Brette, 2019). Here, we argue that it is an insufficient guide for building an artificial intelligence (AI) that learns to accomplish short- and long-term goals in a complex, changing environment. The goal of neuroscience is to explain how the brain enables intelligent behaviour, while the goal of agent-based AI is to build agents that behave intelligently. Neuroscience, Brette attests, has suffered from an exaggerated (and technically inaccurate) concern for the codes transmitted by particular parts of the brain.


I Robot, Your Companion

AITopics Original Links

The concept of a cognitive robotic companion inspires some of the best science fiction but one day may be science fact following the work of the four-year COGNIRON project funded since January 2004 by the IST's Future and Emerging Technologies initiative. But what could a cognitive robot companion do? "The example that's often used is a robot that's able to fulfil your needs, like passing you a drink or helping in everyday tasks," says Dr Raja Chatila, research director at the Systems Architecture and Analysis Laboratory of the French Centre National de la Recherche Scientifique (LAAS-CNRS), and COGNIRON project coordinator. "That might seem a bit trivial, but let me ask you a question: In the 1970s, what was the use of a personal computer?" he asks. In fact, it was then impossible to imagine how PCs would change the world's economics, politics and society in just 30 years. The eventual uses, once the technology developed, were far from trivial. COGNIRON set out on the same principle, ...


The road to artificial intelligence: A case of data over theory

#artificialintelligence

In the summer of 1956, a remarkable collection of scientists and engineers gathered at Dartmouth College in Hanover, New Hampshire. Among them were computer scientist Marvin Minsky, information theorist Claude Shannon and two future Nobel prizewinners, Herbert Simon and John Nash. Their task: to spend the summer months inventing a new field of science called "artificial intelligence" (AI). They did not lack in ambition, writing in their funding application: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Their wish list was "to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves". They thought that "a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."


The Winograd Schema Challenge

Levesque, Hector (University of Toronto) | Davis, Ernest (New York University) | Morgenstern, Leora (SAIC)

AAAI Conferences

In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. A Winograd schema is a pair of sentences that differ only in one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences. We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader, but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection of one sentence from each pair, and required to achieve human-level accuracy in choosing the correct disambiguation.
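The schema structure described above can be made concrete. Below is a minimal sketch using the canonical trophy-and-suitcase example from the Winograd schema literature; the dictionary fields and the pose helper are illustrative conventions of this sketch, not a standard format:

```python
# A Winograd schema: a sentence template whose "special word" flips which
# candidate the pronoun refers to, while the rest of the sentence is unchanged.
schema = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too {}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    # Swapping the special word reverses the correct referent:
    "answers": {"big": "the trophy", "small": "the suitcase"},
}

def pose(word):
    """Render one half of the schema pair as a test question."""
    sentence = schema["sentence"].format(word)
    return f"{sentence} What is too {word}?"

for word, answer in schema["answers"].items():
    print(pose(word), "->", answer)
```

The design goal the abstract states is visible here: because the two halves differ in a single word, corpus statistics over the shared words give no purchase, and resolving the pronoun requires knowing how sizes of containers and contents interact.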


The Winograd Schema Challenge

Levesque, Hector J. (University of Toronto)

AAAI Conferences

In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. Like the original, it involves responding to typed English sentences, and English-speaking adults will have no difficulty with it. Unlike the original, the subject is not required to engage in a conversation and fool an interrogator into believing she is dealing with a person. Moreover, the test is arranged in such a way that having full access to a large corpus of English text might not help much. Finally, the interrogator or a third party will be able to decide unambiguously after a few minutes whether or not a subject has passed the test.